Queen Mary University
China's AI is quietly making big inroads in Silicon Valley
China's AI models are quickly gaining traction in Silicon Valley, becoming integral to the operations of American companies and earning the praise of a growing list of tech leaders. Their rapid ascent has highlighted the competitive edge that Chinese developers such as Alibaba, Z.ai, Moonshot, and MiniMax have been able to gain by offering so-called "open" language models at much lower costs than their rivals in the United States.

Airbnb CEO Brian Chesky generated headlines in October when he revealed that the short-term rental platform had opted for Alibaba's Qwen over OpenAI's ChatGPT, praising the Chinese model as "fast and cheap". Social Capital CEO Chamath Palihapitiya revealed the same month that his company had migrated much of its work to Moonshot's Kimi K2 as it was "way more performant" and "a ton cheaper" than models from OpenAI and Anthropic. Programmers on social media also recently highlighted evidence that two popular US-developed coding assistants, Composer and Windsurf, were built on Chinese models.
- North America > United States > California (0.83)
- Asia > China > Beijing > Beijing (0.06)
- South America > Brazil (0.05)
- (7 more...)
- Information Technology (1.00)
- Government (0.74)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.46)
Modulation Discovery with Differentiable Digital Signal Processing
Mitcheltree, Christopher, Tan, Hao Hao, Reiss, Joshua D.
Modulations are a critical part of sound design and music production, enabling the creation of complex and evolving audio. Modern synthesizers provide envelopes, low frequency oscillators (LFOs), and other parameter automation tools that allow users to modulate the output with ease. However, determining the modulation signals used to create a sound is difficult, and existing sound-matching / parameter estimation systems are often uninterpretable black boxes or predict high-dimensional framewise parameter values without considering the shape, structure, and routing of the underlying modulation curves. We propose a neural sound-matching approach that leverages modulation extraction, constrained control signal parameterizations, and differentiable digital signal processing (DDSP) to discover the modulations present in a sound. We demonstrate the effectiveness of our approach on highly modulated synthetic and real audio samples and its applicability to different DDSP synth architectures, and we investigate the trade-off it incurs between interpretability and sound-matching accuracy. We make our code and audio samples available and provide the trained DDSP synths in a VST plugin.
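To make the approach concrete, here is a minimal sketch of the underlying idea: parameterize a control signal with a few interpretable, learnable parameters, render audio through a differentiable synth, and fit those parameters by gradient descent. This is not the authors' code; the single amplitude LFO, the sine oscillator, and the waveform MSE loss are simplifying assumptions (the paper's system uses constrained modulation-curve parameterizations, full DDSP synths, and richer losses).

```python
# Minimal sketch (not the authors' code): recover an amplitude LFO's rate
# and depth by gradient descent through a differentiable synth.
import torch

SR = 16_000                      # sample rate (Hz); illustrative choice
N = SR                           # one second of audio

class LFOSynth(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Unconstrained parameters, squashed into interpretable ranges below.
        self.raw_rate = torch.nn.Parameter(torch.zeros(1))
        self.raw_depth = torch.nn.Parameter(torch.zeros(1))

    def forward(self, f0: float) -> torch.Tensor:
        t = torch.arange(N) / SR
        rate = 0.1 + 9.9 * torch.sigmoid(self.raw_rate)   # LFO rate, 0.1-10 Hz
        depth = torch.sigmoid(self.raw_depth)             # LFO depth, 0-1
        # Constrained, low-dimensional modulation curve: a sinusoidal LFO
        # applied to the amplitude of a sine oscillator.
        mod = 1.0 - depth * 0.5 * (1.0 + torch.sin(2 * torch.pi * rate * t))
        return mod * torch.sin(2 * torch.pi * f0 * t)

target, model = LFOSynth(), LFOSynth()
with torch.no_grad():                         # fabricate a "reference" sound
    target.raw_rate.fill_(0.8)
    target.raw_depth.fill_(1.5)
    y_ref = target(220.0)

opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(300):
    opt.zero_grad()
    # Waveform MSE for brevity; DDSP systems typically use spectral losses.
    loss = torch.mean((model(220.0) - y_ref) ** 2)
    loss.backward()
    opt.step()
print(model.raw_rate.item(), model.raw_depth.item())   # recovered parameters
```

Because the control curve has only two interpretable parameters, the fitted values can be read off directly, which is the interpretability benefit the abstract contrasts with high-dimensional framewise prediction.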
- Europe > United Kingdom > England > Greater London > London (0.40)
- Asia > Singapore (0.04)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
RAVE: Retrieval and Scoring Aware Verifiable Claim Detection
The rapid spread of misinformation on social media underscores the need for scalable fact-checking tools. A key step is claim detection, which identifies statements that can be objectively verified. Prior approaches often rely on linguistic cues or claim check-worthiness, but these struggle with vague political discourse and diverse formats such as tweets. We present RAVE (Retrieval and Scoring Aware Verifiable Claim Detection), a framework that combines evidence retrieval with structured signals of relevance and source credibility. Experiments on CT22-test and PoliClaim-test show that RAVE consistently outperforms text-only and retrieval-based baselines in both accuracy and F1.
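The abstract suggests fusing a text-only verifiability score with retrieval-derived signals. The sketch below illustrates that general idea only; the Evidence fields, the weights, and the max/mean aggregation are assumptions made for the example, not the paper's actual scoring function.

```python
# Illustrative sketch (weights and aggregation are assumptions): combine a
# text-only score with retrieval relevance and source-credibility signals.
from dataclasses import dataclass

@dataclass
class Evidence:
    relevance: float    # retrieval score in [0, 1]
    credibility: float  # source credibility prior in [0, 1]

def claim_score(text_score: float, evidence: list[Evidence],
                w_text: float = 0.5, w_rel: float = 0.3,
                w_cred: float = 0.2) -> float:
    """Fuse text and evidence signals into one verifiability score."""
    if not evidence:
        return w_text * text_score
    rel = max(e.relevance for e in evidence)        # best-supporting document
    cred = sum(e.credibility for e in evidence) / len(evidence)
    return w_text * text_score + w_rel * rel + w_cred * cred

# Example: a vague political statement with weak, low-credibility evidence
# scores low and would not be flagged as a verifiable claim.
print(claim_score(0.35, [Evidence(0.2, 0.4), Evidence(0.1, 0.3)]))
```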
- Health & Medicine (0.72)
- Media > News (0.68)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.72)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.48)
- Information Technology > Artificial Intelligence > Natural Language > Information Retrieval (0.47)
Play Style Identification Using Low-Level Representations of Play Traces in MicroRTS
Xia, Ruizhe Yu, Gow, Jeremy, Lucas, Simon
Play style identification can provide valuable game design insights and enable adaptive experiences, with the potential to improve game playing agents. Previous work relies on domain knowledge to construct play trace representations using handcrafted features. More recent approaches incorporate the sequential structure of play traces but still require some level of domain abstraction. In this study, we explore the use of unsupervised CNN-LSTM autoencoder models to obtain latent representations directly from low-level play trace data in MicroRTS. We demonstrate that this approach yields a meaningful separation of different game playing agents in the latent space, reducing reliance on domain expertise and its associated biases. This latent space is then used to guide the exploration of diverse play styles within studied AI players.
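As a rough illustration of the model family described, here is a minimal CNN-LSTM autoencoder over grid-shaped game frames; the layer sizes, the toy input shape, and the repeat-the-latent decoder are assumptions, not the paper's exact architecture.

```python
# Minimal sketch (sizes and shapes are assumptions): a CNN encodes each
# low-level game frame, an LSTM compresses the frame sequence into one
# latent vector, and a decoder LSTM reconstructs the trace from it.
import torch
import torch.nn as nn

class CNNLSTMAutoencoder(nn.Module):
    def __init__(self, channels=8, grid=16, latent=32):
        super().__init__()
        self.cnn = nn.Sequential(                   # per-frame encoder
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())  # -> 16 * 4 * 4 = 256
        self.enc = nn.LSTM(256, latent, batch_first=True)
        self.dec = nn.LSTM(latent, 256, batch_first=True)
        self.out = nn.Linear(256, channels * grid * grid)

    def forward(self, x):                           # x: (B, T, C, H, W)
        B, T = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(B, T, -1)
        _, (z, _) = self.enc(feats)                 # z: (1, B, latent)
        z_seq = z.transpose(0, 1).repeat(1, T, 1)   # feed latent at each step
        h, _ = self.dec(z_seq)
        recon = self.out(h).view(B, T, *x.shape[2:])
        return recon, z.squeeze(0)                  # reconstruction + latent

traces = torch.randn(4, 10, 8, 16, 16)              # 4 toy play traces
model = CNNLSTMAutoencoder()
recon, latent = model(traces)                       # latent: (4, 32)
loss = nn.functional.mse_loss(recon, traces)        # reconstruction objective
```

Once trained, agents can be compared or clustered by where their traces land in the latent space, without any handcrafted domain features.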
- Europe > United Kingdom > England > Greater London > London (0.41)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- Asia > China > Beijing > Beijing (0.04)
Robot Talk Episode 113 – Soft robotic hands, with Kaspar Althoefer
Kaspar Althoefer is Director of the Centre for Advanced Robotics at Queen Mary University of London (QMUL). His research focuses on soft robotics, tactile perception, intelligent manipulation, and machine learning techniques for sensor signal interpretation. His research advancements have significant applications in robot-assisted minimally invasive surgery, rehabilitation, assistive technologies, and human-robot interactions within a range of scenarios, including manufacturing. Before joining QMUL, he was a Professor at King's College London, where he also earned his PhD.
Explainable AI: Definition and attributes of a good explanation for health AI
Kyrimi, Evangelia, McLachlan, Scott, Wohlgemut, Jared M., Perkins, Zane B., Lagnado, David A., Marsh, William, the ExAIDSS Expert Group
Proposals of artificial intelligence (AI) solutions based on increasingly complex and accurate predictive models are becoming ubiquitous across many disciplines. As the complexity of these models grows, transparency and users' understanding often diminish. This suggests that accurate prediction alone is insufficient for making an AI-based solution truly useful. In the development of healthcare systems, this introduces new issues related to accountability and safety. Understanding how and why an AI system makes a recommendation may require complex explanations of its inner workings and reasoning processes. Although research on explainable AI (XAI) has significantly increased in recent years and there is high demand for XAI in medicine, defining what constitutes a good explanation remains ad hoc, and providing adequate explanations continues to be challenging. To fully realize the potential of AI, it is critical to address two fundamental questions about explanations for safety-critical AI applications, such as health-AI: (1) What is an explanation in health-AI? and (2) What are the attributes of a good explanation in health-AI? In this study, we examined published literature and gathered expert opinions through a two-round Delphi study. The research outputs include (1) a definition of what constitutes an explanation in health-AI and (2) a comprehensive list of attributes that characterize a good explanation in health-AI.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.28)
- Europe > Germany > Berlin (0.14)
- Europe > United Kingdom > England > Greater London > London (0.06)
- (11 more...)
- Overview (1.00)
- Research Report > Experimental Study (0.67)
- Research Report > New Finding (0.48)
- Law (1.00)
- Health & Medicine > Health Care Providers & Services (1.00)
- Information Technology > Security & Privacy (0.68)
- (3 more...)
New Artificial Intelligence Tool Predicts When a Bank Should Be Bailed Out by Taxpayers
An artificial intelligence tool could help governments decide whether or not to bail out a bank in crisis by predicting if the intervention will save money for taxpayers in the long term. The AI tool, developed by researchers at University College London (UCL) and Queen Mary University of London, not only assesses whether a bailout is the best strategy for taxpayers, but also suggests how much should be invested in the bank, and which bank or banks should be bailed out at any given time. It is detailed in a new paper to be published today (November 17) in the journal Nature Communications. Using data from the European Banking Authority, the authors tested the algorithm on a network of 35 European financial institutions judged to be the most important to the global financial system. However, it can also be used and calibrated by national banks using detailed proprietary data unavailable to the public.
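The decision the tool automates can be pictured as an expected-cost comparison. The toy sketch below only illustrates that framing; the numbers, probabilities, and single-bank structure are invented, whereas the published model reasons over a whole network of institutions.

```python
# Toy illustration (all figures are made up, not the published model):
# compare the expected long-run cost to taxpayers of bailing out a bank
# versus letting it fail, where failure propagates losses through the system.
def expected_taxpayer_cost(bailout: float, p_fail_after_bailout: float,
                           p_fail_no_bailout: float,
                           contagion_loss: float) -> tuple[float, float]:
    cost_bailout = bailout + p_fail_after_bailout * contagion_loss
    cost_no_bailout = p_fail_no_bailout * contagion_loss
    return cost_bailout, cost_no_bailout

# A 10bn injection cuts the failure probability from 40% to 5%; a failure
# would propagate roughly 60bn of losses through the network.
with_b, without_b = expected_taxpayer_cost(10.0, 0.05, 0.40, 60.0)
print(f"bail out: {with_b:.1f}bn vs do nothing: {without_b:.1f}bn")
```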
Do bees play? A groundbreaking study says yes.
Many animals like to play, often for no other apparent reason than enjoyment. Pet owners know this is true for cats, dogs, even rodents--and scientists have observed the same in some fish, frogs, lizards, and birds. But what about insects? Are their minds and lives rich enough to make room for play? New research published in the journal Animal Behaviour suggests that bumblebees seem to enjoy rolling wooden balls around, without being trained or receiving rewards--presumably just because it's fun. "It shows that bees are not little robots that just respond to stimuli… and they do carry out activities that might be pleasurable," says lead author Samadi Galpayage, a researcher at Queen Mary University of London.
- North America > United States > Tennessee (0.05)
- Europe > United Kingdom > England > Cornwall > Newquay (0.05)
- Europe > Germany > Saxony > Leipzig (0.05)
The Dark Secret Behind Those Cute AI-generated Animal Images
It's no secret that large models, such as DALL-E 2 and Imagen, trained on vast numbers of documents and images taken from the web, absorb the worst aspects of that data as well as the best. Scroll down the Imagen website--past the dragon fruit wearing a karate belt and the small cactus wearing a hat and sunglasses--to the section on societal impact and you get this: "While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized [the] LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models."

It's the same kind of acknowledgement that OpenAI made when it revealed GPT-3 in 2020: "internet-trained models have internet-scale biases." And as Mike Cook, who researches AI creativity at Queen Mary University of London, has pointed out, it's in the ethics statements that accompanied Google's large language model PaLM and OpenAI's DALL-E 2. In short, these firms know that their models are capable of producing awful content, and they have no idea how to fix that.
New AI tool prescribes best treatment for liver cancer
Researchers at King's College Hospital and Queen Mary University of London have developed an AI algorithm which can prescribe the most effective treatment plan for patients diagnosed with primary liver cancer. The computer-based algorithm, named Drug Ranking Using Machine Learning (DRUML), ranks drugs used to treat bile duct cancer (a type of primary liver cancer) based on their efficacy in reducing cancer cell growth. The research into DRUML was recently published in Cancer Research, an American Association for Cancer Research journal. Researchers say that the software could be used in the future to predict individual patient responses to therapies to enable them to select the most effective treatment plan. Professor Pedro Cutillas, researcher at Queen Mary University of London, said: "Patients who are diagnosed with primary liver cancer often have a very poor prognosis. Hence why a one-size-fits-all approach to treatment is not the most effective way to reduce cancer cell growth and why we applied DRUML to this type of cancer."
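As a rough illustration of the ranking idea (not DRUML itself, which is trained on real drug-response and omics data), the sketch below fits one efficacy model per drug and sorts candidates by predicted response for a new patient sample; the features, responses, and sign convention are all placeholders.

```python
# Illustrative sketch only (random placeholder data, not DRUML's training
# set): fit one efficacy model per drug, then rank drugs for a new sample.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 50))      # 200 samples x 50 marker features
drugs = ["drug_A", "drug_B", "drug_C"]

models = {}
for d in drugs:                           # one response model per drug
    y = rng.normal(size=200)              # stand-in for measured response
    models[d] = RandomForestRegressor(
        n_estimators=50, random_state=0).fit(X_train, y)

patient = rng.normal(size=(1, 50))        # one new patient sample
# Assumed sign convention: lower predicted score = more growth reduction.
ranking = sorted(drugs, key=lambda d: models[d].predict(patient)[0])
print("predicted best to worst:", ranking)
```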